1. Flux: The primary image generation tool, combining text prompts, style references, and aspect ratio controls to produce custom visuals.
2. RealTime: A dynamic environment that enables live, instant updates to images as you adjust prompts or inputs.
3. Enhancer: A refinement tool to upscale images, adjust clarity, and add immersive effects for professional-quality outputs.
4. Edit: A canvas-based tool for modifying, blending, and expanding images with precision.
5. Video Models: A suite of tools for creating high-quality videos tailored for storytelling and promotional use.
6. Training: A feature that allows users to upload datasets and train custom models for consistent stylistic outputs.

Flux: Image Generation

Overview
Flux is the foundation of Krea’s image generation capabilities. It allows users to produce visuals by combining text prompts, style references, and aspect ratio adjustments. By integrating reference images and selecting specific models, users can fine-tune their results to achieve unique and detailed outputs.

Available Models
Flux provides several models, each optimized for specific use cases:
● Flux (Default): The standard model, suitable for most general-purpose image generation tasks.
● Flux 1.1 Pro: Offers enhanced quality for more refined outputs.
● Flux 1.1 Pro Ultra: Focuses on ultra-high precision, ideal for intricate designs.
● Ideogram 2.0: Specializes in creating stylized artwork, suitable for creative and experimental projects.
● Ideogram 2.0 Turbo: A faster version of Ideogram, offering quicker generation times with minimal compromise on quality.

Workflow
1. Open a generative session by selecting the model you would like to use from the bottom-left corner of the screen.
2. Adjust the aspect ratio using the dedicated button on the left side of the text box.
3. Enter a description of your desired image in the prompt box labeled “describe a picture…”. Use structured prompts for precise results:
   a. Format: [art style] [subject] [scene] [lighting] [color].
   b. Example: A linocut illustration of a forest clearing, with soft natural light and warm earthy tones.
4. Experiment with style references from the right sidebar by uploading images or selecting preset or trained styles like Cartoon, CGI, Concept, or Photo. Adjust the style weight slider to refine the output.
5. Randomize prompts for inspiration or refine them iteratively for more specific outputs.

Notes:
❖ Use Aesthetic Range to control stylistic variation. Lower values result in minimal variation, while higher values create dramatic shifts in lighting, color grading, and camera angles.
❖ Download, save, or upscale the generated images using the green sparkle button.

Customizing Outputs
Users can upload up to three reference images to guide the generation process. Each reference image has an associated slider that allows users to control its influence on the final output. Additionally, users can select from preloaded styles, such as Cartoon, CGI, Concept, Photo, and Flux Realtime, to further refine the aesthetic of their images.

RealTime: Dynamic Image Generation

Overview
Flux Realtime allows for instant AI-generated image updates based on text input, reference images, or external visual feeds. Users can dynamically modify prompts, integrate multiple references, and adjust AI strength to control stylistic influence in real time.

Workflow
→ Text-Based Generation:
1. Open a Flux Realtime session and start with a simple text prompt.
2. Modify the prompt dynamically; each change updates the image instantly.
   a. Example: “A linocut-style house with glowing windows.”
3. Adjust AI Strength for reference images:
   a. Lower strength prioritizes text input, maintaining precise prompt execution.
   b. Higher strength blends reference images, creating abstract or style-driven results.
4. Experiment with style mixing (e.g., combining Cartoon + CGI) to explore artistic variations.

→ Image/Canvas Mode:
1. Switch to Canvas Mode for direct image-based generation.
2. Use tools from the left navigation bar:
   a. Select and Move: Adjust positioning of generated elements.
   b. Shapes (square, circle, triangle): Overlay geometric elements.
   c. Feather Tool: Generates a small AI image using a separate prompt, which can then be moved on the canvas.
   d. Paintbrush & Eraser: Customize visual elements with color, opacity, and texture adjustments.
   e. Background Customization: Choose from solid colors, textures, illustrations, or AI-generated backgrounds.
3. Upload existing images to guide AI outputs and use the Magic Wand Tool to highlight ideal areas for additions.
4. Save or iterate by blending multiple AI generations onto the same canvas.

→ Share Screen & Camera Integration
1. Share Screen Mode:
   ● Select a browser tab (e.g., Pinterest or a storyboard) as a visual input.
   ● The AI continuously updates based on what’s visible on-screen.
   ● Adjust AI Strength to control the balance between the screen feed and the text prompt.
2. Camera Mode:
   ● Use webcam input to generate real-time AI interpretations of the scene.
   ● The AI recognizes shapes, colors, and motion but does not directly place users into the prompt scenario.
   ○ Adjust AI Strength:
      ■ Lower strength keeps more detail from the camera feed.
      ■ Higher strength abstracts the scene into shapes, textures, and colors.
      ■ Example: Prompting “a woman” while moving in front of the camera results in shifting colors and shapes rather than a literal recreation.
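The structured prompt pattern used in both Flux and Realtime ([art style] [subject] [scene] [lighting] [color]) can be sketched as a small helper. This is a generic illustration for assembling prompt strings, not part of Krea’s product or any Krea API; the function name and connective wording are hypothetical:

```python
def build_prompt(art_style: str, subject: str, scene: str,
                 lighting: str, color: str) -> str:
    """Assemble a prompt following [art style] [subject] [scene] [lighting] [color]."""
    return f"{art_style} of {subject} in {scene}, with {lighting} and {color}"

# Mirrors the linocut example used in the Flux and Realtime workflows above.
prompt = build_prompt(
    "A linocut illustration",
    "a house with glowing windows",
    "a forest clearing",
    "soft natural light",
    "warm earthy tones",
)
print(prompt)
```

Keeping each slot explicit makes it easy to vary one element (e.g., swap the lighting) while holding the rest of the prompt constant during iteration.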
Enhancer: Upscaling and Refinement

Overview
The Enhancer tool in Krea is designed to improve image resolution, clarity, and detail while offering advanced customization options such as Scene Transfer, Clarity Adjustments, Style Presets, and AI-guided Enhancements. Users can upscale images up to 8x, refine texture and lighting, and modify specific regions without affecting the entire composition.

Workflow
1. Upload an Image
   ● Select an existing image from your Krea history or upload a new image.
   ● The left sidebar provides quick access to previously generated assets for enhancement.
2. Choose an Upscale Factor
   ● 1x, 2x, 4x, and 8x scaling options are available for resolution enhancement.
   ● Higher upscale values retain finer details but may introduce subtle AI modifications to textures and edges.
3. Refinement Adjustments
   ● Strength (0-1): Controls how aggressively the AI enhances the image.
   ● Resemblance (0-1): Determines how closely the enhanced output adheres to the original image.
   ● Clarity (0-1): Adjusts sharpness and detail definition.
   ● Match Color Toggle (On/Off): Ensures enhanced results maintain the original color profile.
4. Scene Transfer Mode
   ● Activating Scene Transfer allows users to modify the lighting, environment, and mood of an image.
   ● Upload a reference scene or enter a text-based scene prompt to guide the transformation.
      ○ Example: Applying “cinematic blue lighting” can give an image a high-contrast, film-inspired look.

Enhance Presets for Stylistic Effects
Choose from predefined enhancement presets to refine the output:
   ○ Default: Standard AI refinement with balanced sharpness and clarity.
   ○ Flat Sharp: Enhances texture and sharpness without altering color grading.
   ○ Strong: Increases contrast and detail intensity.
   ○ Reinterpretation: Allows the AI to modify composition while enhancing resolution.
   ○ Oil Painting: Softens edges and adjusts textures for a painted effect.
   ○ Digital Art: Enhances the image with a stylized, high-fidelity digital look.

Prompt-Based Enhancements
- Instead of manually adjusting settings, users can enter a text prompt in the Enhancer tool to guide refinement.
- Example: “Make this image appear as a photorealistic movie poster with sharp details and deep shadows.”

Processing and Iteration:
- Click Enhance to apply the refinement.
- Once generated, adjustments can be made, but further refinements require a new enhancement session rather than direct modifications to the existing enhanced image.
- If an image has already been enhanced at 2x, applying another 1x enhancement will work on the 2x version, not the original, which may result in compounding AI interpretations.

Edit: Image Transformation

Overview
Edit Mode provides a flexible, layer-based canvas for modifying, blending, and expanding images. It supports precise transformations, making it ideal for fine-tuning AI-generated visuals, creating composites, and seamlessly extending images.

Key Features & Tools
1. Select Tool
   a. Move, crop, rotate, and zoom into images for detailed adjustments.
   b. Supports layer-based organization, allowing independent transformations of added elements.
2. Change Region
   a. Use the Paint Tool or Shape Tool (rectangle/circle) to highlight areas for AI modification.
   b. The Magic Wand Tool automatically detects editable regions, suggesting areas where elements can be modified or added.
3. Cut Objects & Object Rearrangement
   a. The AI detects cut paths for easily traceable objects, allowing drag-and-drop repositioning.
   b. Users can move, duplicate, or completely replace objects within an image.
4. Extend Frames (Outpainting): Best for landscape extensions, aspect ratio changes, or generating immersive scenes.
   a. Expand an image’s canvas and use AI auto-completion to generate new areas.
   b. Can be used with or without prompts:
      i. Without a prompt → Krea fills in the missing area based on existing scene context.
      ii. With a prompt → Users can guide how the expansion should look.
5. Add & Blend Images
   a. Upload multiple images into the canvas and layer them together.
   b. Use the Paintbrush Tool to blend seams, refine edges, or adjust how elements merge.
   c. Combining different AI-generated elements in Edit Mode allows for a collage-like composition workflow.

Video Models

Overview
Krea’s AI video models generate animations tailored for storytelling, character motion, and promotional content. Each model has different strengths in frame generation, motion consistency, and character retention.

Available Models:
| MODEL | FEATURES |
|---|---|
| Hunyuan | 512p or 720p resolution, 1-minute generation time. |
| Hailuo | Character consistency over multiple frames, 10-minute generation. |
| 01-Live | Supports start-frame uploads for realistic movement. |
| Luma | Flexible start and end frames, 1-minute generation. |
| Runway | Ideal for orientation-based animations. |
| Kling 1.6 | Short 5-10 second animations. |
| Kling 1.0 Pro | More detailed motion control & longer duration. |
Training

Overview
The Training tool in Krea allows users to train AI models on custom datasets, ensuring consistency across projects. This is useful for brand identity, character design, and stylistic continuity.

Steps to Train a Custom Style in Krea
1. Upload a Dataset
   a. Users must upload at least 3 images of the same art style, character, or object for AI training.
   b. Larger datasets (10-30 images) improve the model’s ability to generalize.
2. Generate a Style Code
   a. Once trained, Krea assigns a unique style code that can be applied to Flux, Edit, and Enhancer outputs.
      i. Example: Training on hand-painted watercolors will allow Krea to replicate that style on any input.
3. Apply & Refine the Style
   ● Users can:
      ○ Apply the trained style to new generations.
      ○ Refine the model by uploading additional images for more accuracy.
      ○ Publish styles for broader application (optional).

Additional Help

Work Smarter, not Harder
1. Use clear, structured prompts:
   a. “A cyberpunk cityscape at night, neon reflections, cinematic lighting.”
   b. Break down subject, environment, mood, lighting, and color palette for more precise outputs.
2. Iterate with Multiple Variations
   a. Use the “Vary” button to generate slight modifications of an image.
   b. Apply different style references to explore variations before finalizing.
3. Use Presets & Refinement Tools
   a. Leverage Enhancer presets (Default, Flat Sharp, Oil Painting, Digital Art) for quick refinements.
   b. Use Scene Transfer for lighting adjustments without fully regenerating an image.
4. Combine Features for Best Results
   a. Generate an image in Flux, modify details in Edit Mode, then refine clarity in Enhancer.
   b. Train a style in Custom Training, then apply it to multiple Flux generations for consistency.

Best Practices for Edit Mode
● Use the Magic Wand Tool for quick AI-assisted selections when unsure where to modify.
● When extending frames, start with smaller increments to maintain coherence.
● Layer multiple AI-generated assets together to create complex, composited scenes.

Best Practices for Training Jobs
● Curate a consistent dataset with uniform lighting, color balance, and resolution.
● Start with simpler styles (e.g., digital paintings, graphic designs) before training highly detailed textures.
● Keep refining the dataset over multiple iterations for improved results.

Best Practices for Using Enhancer
● For high-resolution projects (e.g., prints or detailed concept art), upscale in steps (2x → 4x) rather than jumping to 8x immediately.
● For stylistic consistency, use Scene Transfer or preset filters instead of manual tweaking.
● For seamless blending, integrate enhanced images back into Edit Mode to refine composition edges or apply targeted modifications.

Best Practices for Video Models
● Test different models with the same prompt to see which best fits the intended style.
● Use keyframes strategically; longer keyframe sequences help the AI understand motion better.
● Generate frames separately before animating complex scenes to ensure consistency.
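The Enhancer advice above about upscaling in steps interacts with the compounding behavior noted earlier: each enhancement pass scales the current image, not the original. A quick arithmetic sketch (plain Python, independent of Krea’s actual implementation) shows how pass factors multiply:

```python
def resolution_after_passes(width: int, height: int, factors: list) -> tuple:
    """Each enhancement pass scales the *current* image, not the original."""
    for f in factors:
        width, height = width * f, height * f
    return width, height

# Stepwise 2x then 2x compounds to 4x of the original resolution:
print(resolution_after_passes(1024, 1024, [2, 2]))  # (4096, 4096)
# Jumping straight to 8x in a single pass:
print(resolution_after_passes(1024, 1024, [8]))     # (8192, 8192)
```

This is also why a 1x pass applied after a 2x pass operates on the 2x version: the AI reinterprets whatever the previous pass produced, so stepwise runs trade a single large jump for several smaller, more controllable ones.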
Best Practices for Flux
● Structure Your Prompts: Break down prompts into clear categories like subject, style, scene, lighting, and colors. Example: “A linocut illustration of a forest clearing at sunset, with soft natural light and earthy tones.”
● Experiment with Aesthetic Range: Use low values to keep outputs closer to the original prompt and high values for more creative variation in lighting, angles, and colors.
● Choose the Right Model: For polished outputs, use Flux 1.1 Pro Ultra. For stylized or experimental art, try Ideogram 2.0. Start with faster models like Flux (Default) for conceptual iterations.
● Combine Text Prompts with References: Use up to three reference images and adjust their strength sliders. Lower strength retains prompt details, while higher strength prioritizes the references.
● Use Style Mixing: Combine multiple preloaded styles (e.g., Cartoon + CGI) to explore unique visual aesthetics.
● Iterate with Vary and Regenerate: Use the “Vary” button to create slight modifications and refine the output progressively.

Multi-Tool Workflow Example
Step 1: Generate a Strong Base in Flux
- Craft a structured prompt to define subject, style, and lighting.
- Use reference images to guide the AI toward a desired aesthetic.
Step 2: Modify & Expand in Edit Mode
- Extend canvas dimensions for wider compositions.
- Replace objects or add missing elements using the Magic Wand tool.
- Combine multiple AI-generated images for richer, more complex visuals.
Step 3: Refine Quality in Enhancer
- Increase sharpness, clarity, and detail.
- Adjust Scene Transfer settings to unify lighting and colors.
- Apply style presets (e.g., Oil Painting, Digital Art) for a final polish.
Step 4: Train Custom Styles for Long-Term Consistency
- If generating assets for a specific project or brand, train an AI model to retain stylistic coherence.
- Use trained styles across multiple images, videos, or animations.